perm filename TIMING.MSG[TIM,LSP]3 blob sn#573522 filedate 1981-03-17 generic text, type C, neo UTF8
COMMENT ⊗   VALID 00060 PAGES
C REC  PAGE   DESCRIPTION
C00001 00001
C00009 00002	∂27-Feb-81  1334	Deutsch at PARC-MAXC 	Re: Timings  
C00011 00003	∂27-Feb-81  1342	Dick Gabriel <RPG at SU-AI> 	Timings    
C00013 00004	∂27-Feb-81  1354	RPG  	Timings  
C00015 00005	∂27-Feb-81  1412	Bruce E. Edwards <BEE at MIT-AI> 	Re: timings
C00017 00006	∂27-Feb-81  1427	Deutsch at PARC-MAXC 	Re: Timings  
C00018 00007	∂27-Feb-81  1502	Deutsch at PARC-MAXC 	Re: Timings  
C00020 00008	∂27-Feb-81  1533	Dick Gabriel <RPG at SU-AI> 	Timings    
C00022 00009	∂27-Feb-81  1616	Earl A. Killian <EAK at MIT-MC> 	Timings     
C00023 00010	∂27-Feb-81  1615	George J. Carrette <GJC at MIT-MC> 	timings  
C00024 00011	∂27-Feb-81  1655	David.Neves at CMU-10A 	Re: Timings
C00025 00012	∂27-Feb-81  1658	David.Neves at CMU-10A 	Re: Timings
C00026 00013	∂27-Feb-81  1710	CSVAX.fateman at Berkeley 	Timings 
C00027 00014	∂27-Feb-81  1719	CSVAX.fateman at Berkeley 	Timings 
C00028 00015	∂27-Feb-81  1730	CSVAX.fateman at Berkeley 	timings 
C00030 00016	∂27-Feb-81  1947	George J. Carrette <GJC at MIT-MC> 	Timings  
C00032 00017	∂27-Feb-81  2002	Howard I. Cannon <HIC at MIT-MC> 	Timings    
C00033 00018	∂27-Feb-81  2008	GYRO at MIT-ML (Scott W. Layson) 	Lisp timings    
C00035 00019	∂27-Feb-81  2048	PDL at MIT-DMS (P. David Lebling) 	[Re: Timings  ]
C00036 00020	∂27-Feb-81  2057	JONL at MIT-MC (Jon L White) 	Timings for LISP benchmarks, and reminder of a proposal by Deutsch    
C00043 00021	∂27-Feb-81  2117	Howard I. Cannon <HIC at MIT-MC> 	Timings for LISP benchmarks    
C00044 00022	∂27-Feb-81  2131	CWH at MIT-MC (Carl W. Hoffman) 	Timings     
C00045 00023	∂27-Feb-81  2201	CSVAX.fateman at Berkeley 	here's a test for you to look at/ distribute    
C00054 00024	∂27-Feb-81  2201	CSVAX.fateman at Berkeley 	Timings for LISP benchmarks, and reminder of a proposal by Deutsch  
C00055 00025	∂28-Feb-81  0916	NEDHUE at MIT-AI (Edmund M. Goodhue) 	Timings     
C00056 00026	∂28-Feb-81  1046	Barry Margolin             <Margolin at MIT-Multics> 	Re: Timings
C00057 00027	∂28-Feb-81  1109	Barry Margolin             <Margolin at MIT-Multics> 	Re: Timings
C00058 00028	∂28-Feb-81  1424	Deutsch at PARC-MAXC 	Re: Timings for LISP benchmarks, and reminder of a proposal by 
C00059 00029	∂28-Feb-81  1718	YONKE at BBND 	JONL's message concerning benchmarks    
C00060 00030	∂28-Feb-81  1818	CSVAX.fateman at Berkeley 	why I excluded GC times
C00062 00031	∂28-Feb-81  2014	Guy.Steele at CMU-10A 	Re: Timings 
C00064 00032	∂28-Feb-81  2016	Scott.Fahlman at CMU-10A 	benchmarks    
C00065 00033	∂01-Mar-81  0826	PLATTS at WHARTON-10 ( Steve Platt) 	timing for lisp   
C00066 00034	∂01-Mar-81  1300	RJF at MIT-MC (Richard J. Fateman) 	more lisp mavens   
C00067 00035	∂02-Mar-81  0443	Robert H. Berman <RHB at MIT-MC> 	Timings    
C00068 00036	∂02-Mar-81  0543	Robert H. Berman <RHB at MIT-MC> 	Timings    
C00069 00037	∂02-Mar-81  0741	James E. O'Dell <JIM at MIT-MC> 	Timings
C00072 00038	∂02-Mar-81  1006	Deutsch at PARC-MAXC 	Re: Timings  
C00073 00039	∂02-Mar-81  1312	Barry Margolin             <Margolin at MIT-Multics> 	Re: Timings
C00074 00040	∂02-Mar-81  1634	RPG  	Lisp Timings  
C00079 00041	∂03-Mar-81  1524	RPG  	Lisp Timing Mailing List
C00081 00042	Here's the first message, which you missed:
C00086 00043	∂04-Mar-81  0449	Robert H. Berman <RHB at MIT-MC> 	Lisp Timing Mailing List  
C00091 00044	∂04-Mar-81  0957	Scott.Fahlman at CMU-10A 	Re: Translators    
C00094 00045	∂04-Mar-81  0959	CSVAX.char at Berkeley 	lisp benchmarking    
C00097 00046	∂04-Mar-81  1627	HEDRICK at RUTGERS 	sometime of possible interest 
C00102 00047	∂06-Mar-81  1301	HES at MIT-AI (Howard Shrobe) 	Methodology considerations:  
C00104 00048	Subject: Lisp Timings Group
C00111 00049	∂10-Mar-81  0727	correira at UTEXAS-11  	lisp timings    
C00113 00050	∂03-Mar-81  2109	Barrow at SRI-KL (Harry Barrow ) 	Lisp Timings    
C00116 00051	∂02-Mar-81  0004	Charles Frankston <CBF at MIT-MC> 	timings   
C00120 00052	∂17-Mar-81  1155	Masinter at PARC-MAXC 	Re: GC 
C00124 00053	∂16-Mar-81  1429	HEDRICK at RUTGERS 	Re: Solicitation    
C00129 00054	∂16-Mar-81  1433	HEDRICK at RUTGERS 	Re: GC    
C00136 00055	∂16-Mar-81  1810	Scott.Fahlman at CMU-10A 	Re: GC   
C00138 00056	∂16-Mar-81  1934	PLATTS at WHARTON-10 ( Steve Platt) 	lisp -- my GC and machine specs  
C00142 00057	∂17-Mar-81  0745	Griss at UTAH-20 (Martin.Griss) 	Re: GC      
C00143 00058	∂17-Mar-81  0837	Robert S. Boyer <BOYER at SRI-CSL> 	Solicitation  
C00147 00059	∂17-Mar-81  0847	Robert S. Boyer <BOYER at SRI-CSL> 	LISP Timings  
C00149 00060	∂17-Mar-81  1155	Masinter at PARC-MAXC 	Re: GC 
C00153 ENDMK
C⊗;
∂27-Feb-81  1334	Deutsch at PARC-MAXC 	Re: Timings  
Date: 27 Feb 1981 13:32 PST
From: Deutsch at PARC-MAXC
Subject: Re: Timings
In-reply-to: RPG's message of 27 Feb 1981 1319-PST
To: Dick Gabriel <RPG at SU-AI>
cc: info-lispm at MIT-AI

Your suggestion sounds great.  What we need is someone to organize the process
just a little.  Such a person would do something like the following:

1) Collect the names of volunteers or contact persons at each site, to send sample
programs to.
2) Collect the sample programs from each site, and disseminate them to the
volunteers or contacts at the other sites.
3) Collect the translated sample programs (in case there was controversy over
whether the translation was "trivial", for example, and for documentation and
historical purposes).
4) Collect the results of the timings run at each site, and disseminate them.

Would you like to volunteer?

∂27-Feb-81  1342	Dick Gabriel <RPG at SU-AI> 	Timings    
Date: 27 Feb 1981 1319-PST
From: Dick Gabriel <RPG at SU-AI>
Subject: Timings  
To:   deutsch at PARC-MAXC
CC:   info-lispm at MIT-AI    

	Since everyone I know of is trying to make a decision about what
to do about Lisp computing in the next five years, perhaps we should
try to co-ordinate a test that will help everyone make a decision.
For instance, each center (PARC, MIT, Stanford, CMU, Berkeley,...)
can provide a program that is of interest to them (not too big, of course);
each test site will then provide someone to re-code (in a very trivial sense:
turning greaterp into >, adding declarations) those programs into reasonably
efficient code for their system. The authors will provide timing data and
timing points in their code.
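
[The "very trivial" re-coding described above can be illustrated with a hypothetical fragment -- an editor's sketch, not part of the original message; the function and its names are invented for illustration:

```lisp
;; Hypothetical illustration of "trivial" re-coding (editor's sketch,
;; not from the original correspondence).  A MacLisp-style original:
(defun count-positives (l)
  (cond ((null l) 0)
        ((greaterp (car l) 0) (add1 (count-positives (cdr l))))
        (t (count-positives (cdr l)))))

;; The same function after the kind of renaming RPG mentions:
;; GREATERP -> > and ADD1 -> 1+.  Compiler declarations, where the
;; target dialect wants them, would be added in the same spirit.
(defun count-positives (l)
  (cond ((null l) 0)
        ((> (car l) 0) (1+ (count-positives (cdr l))))
        (t (count-positives (cdr l)))))
```
]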

	Each center may have a few programs since they may have diverse
communities (SAIL and HPP at Stanford). I would be happy to volunteer to
test programs for SAIL MacLisp, which is a 10 version.
			-rpg-



∂27-Feb-81  1354	RPG  	Timings  
To:   deutsch at PARC-MAXC
CC:   RPG at SU-AI, info-lispm at MIT-AI
I will volunteer to co-ordinate the Lisp timing test. I plan to contact:

	Deutsch/Masinter at Parc (InterLisp on MAXC, Dorado, Dolphin...)
	RPG/ROD at SAIL (MacLisp on SAIL, TOPS-20, FOONLY F2)
	VanMelle@SUMEX (InterLisp on TOPS-20)
	Fateman at Berkeley (FranzLisp on VAX)
	Hedrick at Rutgers (UCILISP on TOPS-10?)
	Fahlman/Steele at CMU (SPICELISP on ?, MacLisp on CMU-10)
	HIC at MIT (Lisp Machine)
	JONL at MIT (MacLisp on ITS, NIL on VAX)
	Westfold at SCI (InterLisp on F2)
	Weyhrauch at SAIL (Ilisp on SAIL, LISP1.6 on SAIL)

If anyone has any suggestions about who else to contact or other Lisps
and/or machines to try, let me know soon.

				-rpg-

∂27-Feb-81  1412	Bruce E. Edwards <BEE at MIT-AI> 	Re: timings
Date: 27 February 1981 16:32-EST
From: Bruce E. Edwards <BEE at MIT-AI>
Subject: Re: timings
To: CPR at MIT-EECS
cc: INFO-LISPM at MIT-AI, RWS at MIT-XX

As Peter Deutsch has pointed out, this is a crummy benchmark, which was implemented
by relatively unenlightened programming on the CADR. I made it almost 50% faster
in 5 minutes, and the new numbers are much better. They could be improved further,
but basically people aren't interested in hacking uninteresting benchmarks. Things
like a natural language parser or an AI program are more what we are interested in.
There are some data points along this line, but I can't remember the exact numbers.
Hopefully RG has the numbers for the WOODS lunar program tucked away somewhere.

∂27-Feb-81  1427	Deutsch at PARC-MAXC 	Re: Timings  
Date: 27 Feb 1981 14:26 PST
From: Deutsch at PARC-MAXC
Subject: Re: Timings
In-reply-to: RPG's message of 27 Feb 1981 1354-PST
To: Dick Gabriel <RPG at SU-AI>

Great!  Perhaps we will finally throw some light into the murk of claims and
counter-claims about Lisp speeds that have been made for many years.

You might consider sending out some kind of announcement to LISP-FORUM
and/or LISP-DISCUSSION at MIT-AI as well -- I'm not sure everyone of interest
is on INFO-LISPM.

∂27-Feb-81  1533	Dick Gabriel <RPG at SU-AI> 	Timings    
Date: 27 Feb 1981 1354-PST
From: Dick Gabriel <RPG at SU-AI>
Subject: Timings  
To:   deutsch at PARC-MAXC
CC:   RPG at SU-AI, info-lispm at MIT-AI

I will volunteer to co-ordinate the Lisp timing test. I plan to contact:

	Deutsch/Masinter at Parc (InterLisp on MAXC, Dorado, Dolphin...)
	RPG/ROD at SAIL (MacLisp on SAIL, TOPS-20, FOONLY F2)
	VanMelle@SUMEX (InterLisp on TOPS-20)
	Fateman at Berkeley (FranzLisp on VAX)
	Hedrick at Rutgers (UCILISP on TOPS-10?)
	Fahlman/Steele at CMU (SPICELISP on ?, MacLisp on CMU-10)
	HIC at MIT (Lisp Machine)
	JONL at MIT (MacLisp on ITS, NIL on VAX)
	Westfold at SCI (InterLisp on F2)
	Weyhrauch at SAIL (Ilisp on SAIL, LISP1.6 on SAIL)

If anyone has any suggestions about who else to contact or other Lisps
and/or machines to try, let me know soon.

				-rpg-



∂27-Feb-81  1616	Earl A. Killian <EAK at MIT-MC> 	Timings     
Date: 27 February 1981 19:16-EST
From: Earl A. Killian <EAK at MIT-MC>
Subject:  Timings  
To: RPG at SU-AI

I've got a queuing simulation program in MC:EAK;SIMUL > that
while it isn't at all typical of AI, uses an interesting mix of
list and numeric computation, and also runs for a fair time while
not being overly long.  I'm not sure whether it's useful to you,
but if it is, let me know.

∂27-Feb-81  1615	George J. Carrette <GJC at MIT-MC> 	timings  
Date: 27 February 1981 17:35-EST
From: George J. Carrette <GJC at MIT-MC>
Subject:  timings
To: Deutsch at PARC-MAXC
cc: INFO-LISPM at MIT-MC, masinter at PARC-MAXC, guttag at MIT-XX,
    RWS at MIT-XX

How about using Macsyma? It has some interesting programs in it,
and it has given the Lispmachine quite a work-out on some large
real problems (or did the Lispmachine give macsyma a work out?).

-gjc


∂27-Feb-81  1655	David.Neves at CMU-10A 	Re: Timings
Date: 27 February 1981 1954-EST (Friday)
From: David.Neves at CMU-10A
To: Dick Gabriel <RPG at SU-AI> 
Subject:  Re: Timings
In-Reply-To:  Dick Gabriel's message of 27 Feb 81 16:54-EST
Message-Id: <27Feb81 195427 DN10@CMU-10A>

why not also try TLC lisp on a micro.  ask jra@sail
also BBN's Jericho might be relevant but i don't think they
	have a lisp for it.

∂27-Feb-81  1658	David.Neves at CMU-10A 	Re: Timings
Date: 27 February 1981 1957-EST (Friday)
From: David.Neves at CMU-10A
To: Dick Gabriel <RPG at SU-AI> 
Subject:  Re: Timings
In-Reply-To:  Dick Gabriel's message of 27 Feb 81 16:54-EST
Message-Id: <27Feb81 195751 DN10@CMU-10A>

p.s.  also i believe people at BBN are trying to put Interlisp on
 a Prime computer.  If they do have a version up that would be
 another data point.  i don't know who you would contact though.

∂27-Feb-81  1710	CSVAX.fateman at Berkeley 	Timings 
Date: 27 Feb 1981 16:20:26-PST
From: CSVAX.fateman at Berkeley
To: RPG@SU-AI, deutsch@PARC-MAXC
Subject: Timings
Cc: info-lispm@mit-ai

add Griss@utah-20 (standard lisp on 10, b-1700, ...)

∂27-Feb-81  1730	CSVAX.fateman at Berkeley 	timings 
Date: 27 Feb 1981 16:43:27-PST
From: CSVAX.fateman at Berkeley
To: Deutsch@PARC-MAXC, GJC@MIT-MC
Subject: timings
Cc: INFO-LISPM@MIT-MC, RWS@MIT-XX, CSVAX.fateman@Berkeley, guttag@MIT-XX, masinter@PARC-MAXC

George: are you offering to put Macsyma up in Interlisp?  We already
have some LM /KL-10/ VAX-11/780 benchmarks (KL-10 maclisp):
Vaxima and Lisp Machine timings for DEMO files
(fg genral, fg rats, gen demo, begin demo)
(garbage collection times excluded.)  Times in seconds.

MC        VAXIMA     128K lm     192K lm    256K lm VAXIMA Jul 80
4.119	   17.250   43.333      19.183     16.483    15.750
2.639	    7.016   55.916      16.416     13.950  
3.141	   10.850  231.516      94.933     58.166   
4.251	   16.700  306.350     125.666     90.716    12.400

(Berkeley CS.VAX 11/780 UNIX April 8, 1980,  KL-10 MIT-MC ITS April 9, 1980.)

improvements due to expanding alike1 and a few odds and ends as macros;
also some improvements in the compiler.


∂27-Feb-81  1947	George J. Carrette <GJC at MIT-MC> 	Timings  
Date: 27 February 1981 22:47-EST
From: George J. Carrette <GJC at MIT-MC>
Subject:  Timings  
To: RPG at SU-AI
cc: deutsch at PARC-MAXC

I have a useful benchmark which I just tried in Maclisp at MIT-MC
and on a LISPM. It is code which does line-drawing window-clipping 
for arbitrary convex polygonal regions. This code is in actual use.
If you want to see it, it is on MIT-MC in
[MC:BLIS11;CLIP >]. (yes, I hack BLISS. (wow what a compiler!))
It is a nice example because it tests the speed of the FUNCALL dispatch.
The file is conditionalized to run in either LISPM or Maclisp, and
even includes the timing methods used. I would very much like it
if I could run the same (*exactly*) conditionalized source on
N different systems; that way I would have
(1) greater confidence
(2) an exact knowledge of how things are done differently on the
    different systems. e.g. how much hair one has to go through to
    declare things to the compiler.

-gjc

∂27-Feb-81  2002	Howard I. Cannon <HIC at MIT-MC> 	Timings    
Date: 27 February 1981 23:02-EST
From: Howard I. Cannon <HIC at MIT-MC>
Subject:  Timings  
To: RPG at SU-AI

I'll be happy to do the timing tests.
--Howard

∂27-Feb-81  2008	GYRO at MIT-ML (Scott W. Layson) 	Lisp timings    
Date: 27 FEB 1981 2306-EST
From: GYRO at MIT-ML (Scott W. Layson)
Subject: Lisp timings
To: rpg at SU-AI
CC: GYRO at MIT-ML, INFO- at MIT-ML, INFO-LISPM at MIT-ML

I know this is a little silly, but if you have any REALLY tiny
benchmarks (space-wise) I would like to try them out in TLC-Lisp
and muLisp, both running on a 64K Z-80.  These Lisps don't page,
so the program and data have to fit in small real memory.
(Perhaps I should call them "Lisplets"?)

Incidentally, it seems to me that GC time should be included in
the times reported.  Different systems generate garbage at
different rates and deal with it at different efficiencies,
and this shows up in the user-response time of the systems
(which is, after all, what we really want to know).

-- Scott Layson
---------------

∂27-Feb-81  2048	PDL at MIT-DMS (P. David Lebling) 	[Re: Timings  ]
Date: 27 Feb 1981 2348-EST
From: PDL at MIT-DMS (P. David Lebling)
To: rpg at SU-AI
In-reply-to: Message of 27 Feb 81 at 1354 PST by RPG@SU-AI
Subject: [Re: Timings  ]
Message-id: <[MIT-DMS].187847>

You should contact either CLR@MIT-XX or myself for Muddle.
	Dave


∂27-Feb-81  2057	JONL at MIT-MC (Jon L White) 	Timings for LISP benchmarks, and reminder of a proposal by Deutsch    
Date: 27 FEB 1981 2352-EST
From: JONL at MIT-MC (Jon L White)
Subject: Timings for LISP benchmarks, and reminder of a proposal by Deutsch
To: rpg at SU-AI
CC: LISP-DISCUSSION at MIT-MC, BEE at MIT-AI, JHL at MIT-AI
CC: CSVAX.fateman at BERKELEY, RWS at MIT-XX

I notice you sent your proposal to INFO-LISPM, and thought that the
LISP-DISCUSSION community might want to be aware of it too.  (Deutsch and
Masinter are, I believe, on this list, as is Griss).
    Date: 27 Feb 1981 1354-PST
    From: Dick Gabriel <RPG at SU-AI>
    I will volunteer to co-ordinate the Lisp timing test. I plan to contact:
	    Deutsch/Masinter at Parc (InterLisp on MAXC, Dorado, Dolphin...)
	    RPG/ROD at SAIL (MacLisp on SAIL, TOPS-20, FOONLY F2)
	    VanMelle@SUMEX (InterLisp on TOPS-20)
	    Fateman at Berkeley (FranzLisp on VAX)
	    Hedrick at Rutgers (UCILISP on TOPS-10?)
	    Fahlman/Steele at CMU (SPICELISP on ?, MacLisp on CMU-10)
	    HIC at MIT (Lisp Machine)
	    JONL at MIT (MacLisp on ITS, NIL on VAX)
	    Westfold at SCI (InterLisp on F2)
	    Weyhrauch at SAIL (Ilisp on SAIL, LISP1.6 on SAIL)
    If anyone has any suggestions about who else to contact or other Lisps
    and/or machines to try, let me know soon.
The contact for Rutgers-LISP should probably be JOSH@RUTGERS-10
(John Storrs Hall) who is actively extending the formerly-called UCILISP.
Fateman's login name is   CSVAX.fateman@Berkeley   unless there is some 
smarts to his mailer that I don't know about.
Also, I'd like to suggest the following additions
  GRISS@UTAH-20  for "STANDARD-LISP" on PDP10, IBM370, etc
  John Allen (who knows where?) for his "Cromemco" lisp on Z80 etc
  JHL@MIT-AI (Joachim Laubsch, from Stuttgart, West Germany) who might be 
             able to involve the European LISP community.

    I'll also send a letter of these actions to Shigeki Goto of the Nippon 
Telephone Co. in Tokyo, who generated some sort of flurry last fall with his 
incredibly simple "benchmark" function TAK.  Actually, TAK may be useful as 
one part of a multi-foliate benchmark, since it specifically tests timings 
for (1) function-to-function interface, and (2) simple arithmetic of GREATERP 
and SUB1.  Some of Baskett's benchmarkings score heavily on the array
capabilities, for which FORTRAN compilers "come off smelling like a rose",
and even the fast-arithmetic of MacLISP savors like a garbage dump.
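
[For reference, the TAK function mentioned above is commonly circulated in roughly this form -- a reconstruction by the editor, not part of JONL's message:

```lisp
;; Reconstruction of Goto's TAK benchmark as commonly circulated
;; (editor's addition, not from the original message).  It exercises
;; exactly the two things named above: the function-to-function
;; interface, and the simple arithmetic of GREATERP and SUB1.
(defun tak (x y z)
  (cond ((not (greaterp x y)) z)
        (t (tak (tak (sub1 x) y z)
                (tak (sub1 y) z x)
                (tak (sub1 z) x y)))))
```

A typical timed call is (tak 18. 12. 6.), which returns 7 after tens of thousands of recursive calls.]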

   At the little "lisp discussion" held in Salt Lake City, December 1980,
(attendees partly co-incident with LISP-DISCUSSION mailing list), Peter 
Deutsch made a suggestion which we all liked, but for which there
has been no subsequent action (to my knowledge).  Basically, in order to
educate ourselves into the consequences of the differences between LISP
dialects, and to get some experience in converting "real" code, each
dialect community should nominate a representative piece of "useful code" 
from its environment, and the groups responsible for the other
dialects would try to "transport" it into their own.  Several benefits
should accrue:
  (1) If the "representative" is some useful piece of the general environment, 
      say like the DEFMACRO "cache'ing" scheme of MacLISP/NIL, or the
      Interlisp Masterscope, or whatever, then the "transportation" cost 
      will be repaid by having a useful new tool in the other dialects.
  (2) We should accumulate a library of automatic conversion tools, or
      at least of written reports on the problems involved.
  (3) Each community may be affected in a way which (hopefully) will help 
      reduce the hard-core interdialect incompatibilities.
(Apologies to Deutsch for any garbling of the proposal content).

∂27-Feb-81  2117	Howard I. Cannon <HIC at MIT-MC> 	Timings for LISP benchmarks    
Date: 28 February 1981 00:17-EST
From: Howard I. Cannon <HIC at MIT-MC>
Subject:  Timings for LISP benchmarks
To: rpg at SU-AI, deutsch at PARC-MAXC
cc: Greenberg.Symbolics at MIT-MULTICS

I suggest Greenberg.Symbolics@MIT-MULTICS for Multics MacLisp.

∂27-Feb-81  2131	CWH at MIT-MC (Carl W. Hoffman) 	Timings     
Date: 28 FEB 1981 0030-EST
From: CWH at MIT-MC (Carl W. Hoffman)
Subject: Timings  
To: RPG at SU-AI

    Date: 27 Feb 1981 1354-PST
    From: Dick Gabriel <RPG at SU-AI>

    If anyone has any suggestions about who else to contact or other Lisps
    and/or machines to try, let me know soon.

    				-rpg-

You might also contact Richard Lamson or Bernie Greenberg for timings of
MacLisp on various Multics sites.  Net addresses are "Lamson at MIT-Multics"
and "Greenberg at MIT-Multics".

∂27-Feb-81  2201	CSVAX.fateman at Berkeley 	here's a test for you to look at/ distribute    
Date: 27 Feb 1981 21:26:56-PST
From: CSVAX.fateman at Berkeley
To: rpg@su-ai
Subject: here's a test for you to look at/ distribute


;; test from Berkeley based on polynomial arithmetic.

(declare (special ans coef f inc i k qq ss v *x*
		    *alpha *a* *b* *chk *l *p q* u* *var *y*))
(declare (localf pcoefadd pcplus pcplus1 pplus ptimes ptimes1
		 ptimes2 ptimes3 psimp pctimes pctimes1
		 pplus1))
;; Franz uses maclisp hackery here; you can rewrite lots of ways.
(defmacro pointergp (x y) `(> (get ,x 'order)(get ,y 'order)))

(defmacro pcoefp (e) `(atom ,e))
(defmacro pzerop (x) `(signp e ,x))			;true for 0 or 0.0
(defmacro pzero () 0)
(defmacro cplus (x y) `(plus ,x ,y))
(defmacro ctimes (x y) `(times ,x ,y))


(defun pcoefadd (e c x) (cond ((pzerop c) x)
			      (t (cons e (cons c x)))))

(defun pcplus (c p) (cond ((pcoefp p) (cplus p c))
			  (t (psimp (car p) (pcplus1 c (cdr p))))))

(defun pcplus1 (c x)
       (cond ((null x)
	      (cond ((pzerop c) nil) (t (cons 0 (cons c nil)))))
	     ((pzerop (car x)) (pcoefadd 0 (pplus c (cadr x)) nil))
	     (t (cons (car x) (cons (cadr x) (pcplus1 c (cddr x)))))))
	 
(defun pctimes (c p) (cond ((pcoefp p) (ctimes c p))
			   (t (psimp (car p) (pctimes1 c (cdr p))))))

(defun pctimes1 (c x)
       (cond ((null x) nil)
	     (t (pcoefadd (car x)
			  (ptimes c (cadr x))
			  (pctimes1 c (cddr x))))))

(defun pplus (x y) (cond ((pcoefp x) (pcplus x y))
			 ((pcoefp y) (pcplus y x))
			 ((eq (car x) (car y))
			  (psimp (car x) (pplus1 (cdr y) (cdr x))))
			 ((pointergp (car x) (car y))
			  (psimp (car x) (pcplus1 y (cdr x))))
			 (t (psimp (car y) (pcplus1 x (cdr y))))))

(defun pplus1 (x y)
       (cond ((null x) y)
	     ((null y) x)
	     ((= (car x) (car y))
	      (pcoefadd (car x)
			(pplus (cadr x) (cadr y))
			(pplus1 (cddr x) (cddr y))))
	     ((> (car x) (car y))
	      (cons (car x) (cons (cadr x) (pplus1 (cddr x) y))))
	     (t (cons (car y) (cons (cadr y) (pplus1 x (cddr y)))))))

(defun psimp (var x)
       (cond ((null x) 0)
	     ((atom x) x)
	     ((zerop (car x)) (cadr x))
	      (t (cons var x))))

(defun ptimes (x y) (cond ((or (pzerop x) (pzerop y)) (pzero))
			  ((pcoefp x) (pctimes x y))
			  ((pcoefp y) (pctimes y x))
			  ((eq (car x) (car y))
			   (psimp (car x) (ptimes1 (cdr x) (cdr y))))
			  ((pointergp (car x) (car y))
			   (psimp (car x) (pctimes1 y (cdr x))))
			  (t (psimp (car y) (pctimes1 x (cdr y))))))

(defun ptimes1 (*x* y) (prog (u* v)
			       (setq v (setq u* (ptimes2 y)))
			  a    (setq *x* (cddr *x*))
			       (cond ((null *x*) (return u*)))
			       (ptimes3 y)
			       (go a)))

(defun ptimes2 (y) (cond ((null y) nil)
			 (t (pcoefadd (plus (car *x*) (car y))
				      (ptimes (cadr *x*) (cadr y))
				      (ptimes2 (cddr y))))))

(defun ptimes3 (y) 
  (prog (e u c) 
     a1 (cond ((null y) (return nil)))
	(setq e (+ (car *x*) (car y)))
	(setq c (ptimes (cadr y) (cadr *x*) ))
	(cond ((pzerop c) (setq y (cddr y)) (go a1))
	      ((or (null v) (> e (car v)))
	       (setq u* (setq v (pplus1 u* (list e c))))
	       (setq y (cddr y)) (go a1))
	      ((= e (car v))
	       (setq c (pplus c (cadr v)))
	       (cond ((pzerop c) (setq u* (setq v (pdiffer1 u* (list (car v) (cadr v))))))
		     (t (rplaca (cdr v) c)))
	       (setq y (cddr y))
	       (go a1)))
     a  (cond ((and (cddr v) (> (caddr v) e)) (setq v (cddr v)) (go a)))
	(setq u (cdr v))
     b  (cond ((or (null (cdr u)) (< (cadr u) e))
	       (rplacd u (cons e (cons c (cdr u)))) (go e)))
	(cond ((pzerop (setq c (pplus (caddr u) c))) (rplacd u (cdddr u)) (go d))
	      (t (rplaca (cddr u) c)))
     e  (setq u (cddr u))
     d  (setq y (cddr y))
	(cond ((null y) (return nil)))
	(setq e (+ (car *x*) (car y)))
	(setq c (ptimes (cadr y) (cadr *x*)))
     c  (cond ((and (cdr u) (> (cadr u) e)) (setq u (cddr u)) (go c)))
	(go b))) 





(defun pexptsq (p n)
	(do ((n (quotient n 2) (quotient n 2))
	     (s (cond ((oddp n) p) (t 1))))
	    ((zerop n) s)
	    (setq p (ptimes p p))
	    (and (oddp n) (setq s (ptimes s p))) ))
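

; Editor's note (not part of the distributed benchmark): PEXPTSQ
; computes p^n by repeated squaring, and its only operations on p are
; PTIMES and parity tests on n, so the control structure can be
; sanity-checked with plain integer arguments:

```lisp
;; Editor's sanity check, not part of the distributed benchmark.
;; With integer arguments PTIMES reduces to ordinary multiplication,
;; so PEXPTSQ behaves like integer exponentiation by squaring:
;;   (pexptsq 2 10)  =>  1024  ; p squares through 4, 16, 256;
;;                             ; s picks up factors at the odd steps
;;   (pexptsq 2 5)   =>  32    ; s starts at p because n is odd
```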



(defun setup nil
  (putprop 'x 1 'order)
  (putprop 'y 2 'order)
  (putprop 'z 3 'order)
  (setq r (pplus '(x 1 1 0 1) (pplus '(y 1 1) '(z 1 1)))) ; r= x+y+z+1
  (setq r2 (ptimes r 100000)) ;r2 = 100000*r
  (setq r3 (ptimes r 1.0)); r3 = r with floating point coefficients
  )
; Time various computations of powers of polynomials, not counting
; printing but including GC time; provide an account of GC time.

; The following function uses (ptime) for process-time and is thus
;  Franz-specific.

(defun bench (n)
  (setq start (ptime)) ;  Franz ticks, 60 per sec, 2nd number is GC
  (pexptsq r n) 
  (setq res1 (ptime))
  (pexptsq r2 n)
  (setq res2 (ptime))
  ; this one requires bignums.
  (pexptsq r3 n)
  (setq res3 (ptime))
  (list 'power=  n (b1 start res1)(b1 res1 res2)(b1 res2 res3)))
(defun b1(x y)(mapcar '(lambda(r s)(quotient (- s r) 60.0)) x y))

;instructions:
;  after loading, type (setup)
; then (bench 2) ; this should be pretty fast.
; then (bench 5)
; then (bench 10)
; then (bench 15)
;... 

∂27-Feb-81  2201	CSVAX.fateman at Berkeley 	Timings for LISP benchmarks, and reminder of a proposal by Deutsch  
Date: 27 Feb 1981 21:32:33-PST
From: CSVAX.fateman at Berkeley
To: JONL@MIT-MC, rpg@SU-AI
Subject: Timings for LISP benchmarks, and reminder of a proposal by Deutsch
Cc: BEE@MIT-AI, JHL@MIT-AI, LISP-DISCUSSION@MIT-MC

I have sent an entry (polynomial arithmetic system) to rpg@su-ai.
He can examine and redistribute.
  ( fateman@berkeley is equivalent to csvax.fateman@berkeley...)

∂28-Feb-81  0916	NEDHUE at MIT-AI (Edmund M. Goodhue) 	Timings     
Date: 28 FEB 1981 1215-EST
From: NEDHUE at MIT-AI (Edmund M. Goodhue)
Subject: Timings  
To: RPG at SU-AI

I suggest you add Jim Meehan at UCI (maintainer of UCI LISP) who can
run benchmarks on UCILISP and MLISP on both TOPS-10 and TOPS-20.  UCI
is not on the net but he can be reached via MEEHAN@MIT-AI.

Ned Goodhue

∂28-Feb-81  1046	Barry Margolin             <Margolin at MIT-Multics> 	Re: Timings
Date:     28 February 1981 1343-est
From:     Barry Margolin             <Margolin at MIT-Multics>
Subject:  Re: Timings
To:       RPG at SU-AI
Cc:       info-lispm at MIT-AI

I think you should also contact someone at MIT-Multics, where they run
MacLisp, although I'm not sure who you should contact.

∂28-Feb-81  1424	Deutsch at PARC-MAXC 	Re: Timings for LISP benchmarks, and reminder of a proposal by 
Date: 28 Feb 1981 14:23 PST
From: Deutsch at PARC-MAXC
Subject: Re: Timings for LISP benchmarks, and reminder of a proposal by
 Deutsch
In-reply-to: JONL's message of 27 FEB 1981 2352-EST
To: rpg at SU-AI, LISP-DISCUSSION at MIT-MC, BEE at MIT-AI, JHL at MIT-AI,
 CSVAX.fateman at BERKELEY, RWS at MIT-XX

JONL accurately represented the content of my proposal.  The set of programs
being submitted for timing tests might indeed be a useful place to start.

∂28-Feb-81  1718	YONKE at BBND 	JONL's message concerning benchmarks    
Date: 28 Feb 1981 2009-EST
Sender: YONKE at BBND
Subject: JONL's message concerning benchmarks
From: YONKE at BBND
To: RPG at SU-AI, Lisp-Discussion at MIT-MC
Message-ID: <[BBND]28-Feb-81 20:09:20.YONKE>

I'd like to add Interlisp on Jericho (our in-house machine).
Also, since BBN has several different flavors of DEC hardware
which run TOPS-20, I wouldn't mind supplying these different
timings, and they would probably be more informative than Kurt's
(VanMelle) from SUMEX.

Martin

∂28-Feb-81  1818	CSVAX.fateman at Berkeley 	why I excluded GC times
Date: 28 Feb 1981 17:15:23-PST
From: CSVAX.fateman at Berkeley
To: HES@MIT-AI
Subject: why I excluded GC times
Cc: CSVAX.fateman@Berkeley, info-lispm@mit-mc, lisp-discussion@mit-mc

including GC times makes for a very messy statistical situation.
GC time (or even if it happens at all) is dependent on the virtual
address space in use at the time, how much of the macsyma system
has been loaded (in the case of the KL-10), etc.  I do not know
about the LM figures, since I am only reporting stuff sent to me,
but the KL-10 and the VAX typically spend 30% additional time in
GC, averaged over various "production" runs.  Trading off GC time
for system paging time is a funny business, though I agree it
is important.


∂28-Feb-81  2014	Guy.Steele at CMU-10A 	Re: Timings 
Date: 28 February 1981 2313-EST (Saturday)
From: Guy.Steele at CMU-10A
To: Dick Gabriel <RPG at SU-AI> 
Subject:  Re: Timings
In-Reply-To:  Dick Gabriel's message of 27 Feb 81 16:54-EST
Message-Id: <28Feb81 231341 GS70@CMU-10A>

You may want to get in touch with the people at Utah (Standard LISP)
for various machines, and maybe John Allen (who has implementations
for micros, for low end of curve).

Also let me note that you are likely to get a great CACM article or
something out of distilling all this stuff if you want; more power
to you.  I'll coordinate running tests on Spice LISP, though that
may take some time to materialize.
--Q

∂28-Feb-81  2016	Scott.Fahlman at CMU-10A 	benchmarks    
Date: 28 February 1981 2315-EST (Saturday)
From: Scott.Fahlman at CMU-10A
To: rpg at su-ai
Subject:  benchmarks
Message-Id: <28Feb81 231549 SF50@CMU-10A>


Hi,
I just added my name to Lisp discussion recently and seem to have missed
something.  Exactly what benchmarks are you running/getting people to
run?  If there was a message that kicked all of this off, I would be
interested in seeing it.

We will be happy to add Spice Lisp on Perq benchmarks when the time comes,
but we won't be ready till summer.
-- Scott

∂01-Mar-81  0826	PLATTS at WHARTON-10 ( Steve Platt) 	timing for lisp   
Date:  1 Mar 1981 (Sunday) 1124-EDT
From: PLATTS at WHARTON-10 ( Steve Platt)
Subject: timing for lisp
To:   rpg at SU-AI

  ...if the systems are not *too* big, I'd like to try them on my micro
(Z80) lisp....  rough limits -- stack is a few hundred calls deep (I can
relink to change this if necessary), cell space is limited to roughly
10K cells.  Most basic major lisp functions (a la maclisp, for the most
part) are implemented, others can be added.
   -Steve

∂01-Mar-81  1300	RJF at MIT-MC (Richard J. Fateman) 	more lisp mavens   
Date:  1 MAR 1981 1600-EST
From: RJF at MIT-MC (Richard J. Fateman)
Subject: more lisp mavens
To: rpg at SU-AI

Try boyer@sri-kl.  They have an F2, and Boyer undoubtedly
could supply a theorem-prover benchmark.

∂02-Mar-81  0443	Robert H. Berman <RHB at MIT-MC> 	Timings    
Date: 2 March 1981 07:43-EST
From: Robert H. Berman <RHB at MIT-MC>
Subject:  Timings  
To: RPG at SU-AI
cc: deutsch at PARC-MAXC

Please add me to your timing test survey. I have several
suggestions of features that I would like to know about.

Thanks.

--Bob

∂02-Mar-81  0741	James E. O'Dell <JIM at MIT-MC> 	Timings
Date: 2 March 1981 10:40-EST
From: James E. O'Dell <JIM at MIT-MC>
Subject:  Timings
To: Margolin at MIT-MULTICS
cc: RPG at SU-AI

    Date: 28 February 1981 1343-est
    From: Barry Margolin <Margolin at MIT-Multics>
    To:   RPG at SU-AI
    cc:   info-lispm at MIT-AI
    Re:   Timings

    I think you should also contact someone at MIT-Multics, where they run
    MacLisp, although I'm not sure who you should contact.

If the timings don't take too long to work up I'd be glad to run the
Multics Lisp trials. As you might know we have a Macsyma running there
now, version 293. It typically runs at .6 of a MC. The tricky thing is that
on some BIG problems it runs as fast or faster than MC because of its
larger address space. It spends less of its time collecting garbage than
on MC. I feel that this is an important factor.

At least one of the timings should CONS up a storm. We have had problems
with address space on both the LISPM and on 10's. Some large Macsyma
problems use up all of the address space on the LISPM because we don't run
with the garbage collector. GC'ing on the LISPM slows things down a lot.

I also think that the LISPM is being unfairly compared because of its
single user nature. The numbers do not accurately reflect the responsiveness
observed by the user.


∂02-Mar-81  1006	Deutsch at PARC-MAXC 	Re: Timings  
Date: 2 Mar 1981 10:06 PST
From: Deutsch at PARC-MAXC
Subject: Re: Timings
In-reply-to: RPG's message of 27 Feb 1981 1354-PST
To: Dick Gabriel <RPG at SU-AI>
cc: Masinter

Please take me off the list of people doing Lisp timings.  Larry Masinter or
someone else at PARC who is actively working on Lisp (which I am not) is more
appropriate.

∂02-Mar-81  1312	Barry Margolin             <Margolin at MIT-Multics> 	Re: Timings
Date:     2 March 1981 1610-est
From:     Barry Margolin             <Margolin at MIT-Multics>
Subject:  Re: Timings
To:       JIM at MIT-MC
Cc:       RPG at SU-AI

Bernie Greenberg has already been volunteered to do the Multics MacLisp
timings, although I'm sure he won't mind your help, especially when it
gets to Macsyma timings.

∂02-Mar-81  1634	RPG  	Lisp Timings  
To:   info-lispm at MIT-AI, lisp-discussion at MIT-AI,
      "#TIMING.MSG[TIM,LSP]" at SU-AI
	As most of you know, there will be an attempt made to do a
series of Lisp timings in which various benchmarks submitted by the
Lisp community are tested on a variety of different Lisp systems.
Since there will need to be some translations done in order to run
these benchmarks in systems for which they were not intended, there
is the secondary (!) problem of learning what is really needed to do
these translations more readily in the future.

	I will be co-ordinating this effort and will be distributing
the results when they are in. For this purpose I have set up 3
mailing lists at Stanford:

	LISPTIMING 	 the list of people interested in this topic
	LISPTRANSLATORS, the list of people who have volunteered
			 to do the timing tests (and translations)
			 at the various sites
	LISPSOURCES	 the list of people who will be supplying
			 benchmarks

	You can MAIL to these entities at SAIL (e.g. MAIL
LISPTIMING@SAIL...)  and thus avoid swamping the mailing lists we
have been using so far.

	If you care to be on one of these lists, please send me
(rpg@sail) your login name and site exactly as your mailer will
understand it along with which list you wish to be on. If you are
supplying programs or talent, please let me know which Lisp, which
machine, and which OS you are representing.

	In addition, a list of all messages pertaining to this
extravaganza will be on TIMING.MSG[TIM,LSP] at SAIL (you can
FTP from SAIL without logging in). In general, this area will
contain all of the information, programs, and results for this
project.

	If you know of anyone who is not on the net and who may be
able to help, send me a message and a method for getting in touch
with him/her. Over the next few days I hope to establish some of the
methodological considerations (such as GC times) for the project.

			Dick Gabriel	(RPG@SAIL)

∂03-Mar-81  1524	RPG  	Lisp Timing Mailing List
To:   "@LSPTIM.DIS[P,DOC]" at SU-AI   
	Welcome to the Lisp Timing mailing list. As you may have
already guessed, the scope of the Lisp Timing Evaluation project is
very large, and if we are to make a contribution to the
understanding of how to evaluate such an elusive thing as an entire
computing environment we will need to consider many methodological
issues. Since I am no expert on such evaluations I am going to require
a good deal of help, and so far more than 20 people have volunteered.

	The problems we face are not just how to measure the performance 
of these Lisp systems, but how to take a diverse set of benchmark
programs and get them to run on systems very different from those they
were written for.

	I hope at the end of this project to be able to report not
only times for programs, but descriptions of systems, translation
problems, and a general guide to the world of Lisp computing.

	The first substantive mailing will be a quick list of 
methodological points we need to consider. This list is not complete,
but aims at the directions we need to go before actual timing runs
can be performed.

	Thank you for your help in this project.

			Dick Gabriel (RPG@SAIL)

Here's the first message, which you missed:
∂03-Mar-81  1616	RPG  	Methodology considerations:  
To:   "@LSPTIM.DIS[P,DOC]" at SU-AI   
1. GC time is critical. Every timing should include CPU time
as measured by the CPU clock plus GC time. If GC time is not
accounted in the LISP, we should include a standard test, such
as a program that creates a large, standard structure (balanced
tree of some sort?) and then count CPU time on a forced GC, resulting
in a seconds/cell figure for each machine.  Maybe we should do this
in addition to the benchmarks? [In fact, measuring GC time in a meaningful
way is quite difficult due to different algorithms. Perhaps a range of
tree structures? Maybe not all algorithms are symmetric on car/cdr?]
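The forced-GC measurement proposed in point 1 might be sketched as follows. This is a modern editorial illustration in Python, with its gc module standing in for a Lisp collector; the function names and the tree depth are assumptions, not anything from the original messages.

```python
import gc
import time

def build_tree(depth):
    """Build a balanced binary tree of 2-tuples (stand-ins for cons cells)."""
    if depth == 0:
        return None
    return (build_tree(depth - 1), build_tree(depth - 1))

def seconds_per_cell(depth=16):
    """Time one forced collection over a standard structure,
    yielding a rough seconds/cell figure as the note suggests."""
    cells = 2 ** depth - 1          # number of pairs allocated
    tree = build_tree(depth)
    start = time.process_time()     # CPU time, not wall time
    gc.collect()                    # forced GC with the tree live
    elapsed = time.process_time() - start
    del tree
    return elapsed / cells
```

The same skeleton could be repeated over several tree shapes to probe the car/cdr asymmetries mentioned above.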

2. Translating non-standard control structures can be a problem.
What about non-local goto's ala catch/throw? These can be simulated
with ERROR/ERRSET or with spaghetti hackery in InterLisp. These questions
should be addressed by having each translator propose various techniques 
and having the source decide on which to use. Or maybe we should use
all such methods?
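The catch/throw simulation discussed in point 2 can be sketched in any language with exceptions; the ERROR/ERRSET trick works the same way. A hedged Python illustration (the tag-matching discipline here is an assumption about how a translator might do it):

```python
class Thrown(Exception):
    """Carries a tag and a value, like (THROW tag value)."""
    def __init__(self, tag, value):
        self.tag, self.value = tag, value

def catch(tag, thunk):
    """Like (CATCH tag body): run thunk, intercept a matching throw."""
    try:
        return thunk()
    except Thrown as t:
        if t.tag == tag:
            return t.value
        raise                       # not ours: keep unwinding

def throw(tag, value):
    raise Thrown(tag, value)

def find_first_negative(xs):
    """Example: non-local exit from the middle of a loop."""
    def body():
        for x in xs:
            if x < 0:
                throw('found', x)
        return None
    return catch('found', body)
```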

3. All non-LISP syntax must be pre-expanded (i.e. CLISP) to allow
the local system to optimize as appropriate.

4. Both interpreted and compiled code will be timed.
All code will have macros pre-expanded (at local sites?) so that
efficiencies due to incremental destructive expansion can be
eliminated. 

5. Numeric code should have all types announced to the translators by the
sources so that optimizations can be made without deductions.
All other such information must be provided.

6. The size of such programs can be arbitrary, though translating
MACSYMA may take a while to do. 

7. All tools developed to aid translation should be forwarded to
RPG so that they may be evaluated and distributed if appropriate.

8. Programs that are useful to a source but impossible (in a
practical sense) to translate should merit special attention to 
decide if there is a useful feature involved.

9. (from GLR)
Timing various programs is a good idea, but the data will
be a little hard to extrapolate.  Is anyone going to measure
parameters such as CONS rate, time to invoke a procedure,
and add times? [Not only that, but number CONSing and its
effect on numeric calculations should be measured as well. Here
RPG will appoint some experts (like JONL) to make up some
good numeric testing code to isolate implementational problems
with specific aspects of Lisp.]

10. People should supply some estimate of the runtime and the results
of their benchmarks. Such things as 2 minutes of CPU on a KL running
TOPS-10 is OK, but for unfamiliar machines/Lisps this may not be good enough.
Try to aim at some estimate in terms of the number of CONSes or function
call depth.

11. Every candidate system should have a detailed description of that
system (CPU architecture, memory size, address size, paging algorithm...)

∂04-Mar-81  0449	Robert H. Berman <RHB at MIT-MC> 	Lisp Timing Mailing List  
Date: 4 March 1981 07:48-EST
From: Robert H. Berman <RHB at MIT-MC>
Subject:  Lisp Timing Mailing List
To: RPG at SU-AI
cc: " @LSPTIM.DIS[P,DOC]" at SU-AI


May I suggest the following as a benchmark for numerically oriented
problems: the time it takes to do a fast Fourier transform of, say,
length 1024, of real or complex data.


I have been collecting timing statistics for this benchmark over a
period of 6 years on a wide range of machines (nearly 50) and compilers,
assemblers, etc.  Thus, this benchmark would be very helpful
in relating Lisp machine performance to many other architectures.

I have a class of problems that I run that use transform methods
for solving partial differential equations and performing
convolutions and smoothing. Hence my interest in FFTs.

Several points to keep in mind about this benchmark:

1. On LM's there is a difference between small flonums and flonums.
Suppose it were done with macsyma's bigfloat package to allow
for  extended precision.

2. Fast Fermat (Integer) Transforms are also helpful here. Integers
in the range 0 to 2↑20, say, can be as useful as small
flonums, but they use only integer arithmetic.

3. Power of 2 transforms, and their families, radix 2, radix 4+2,
radix 8+4+2, etc., can do some of their arithmetic by shifting, rather than
dividing. But other bases, i.e. 96 instead of 64 or 128, can be more
efficient than doubling the transform length.

4. The internal data representation can make a difference. Local
variables on the stack of a subroutine are faster to reference than
arrays. I understand there is an architectural limit of 64 stack
variables on LM's. Would it ever be possible to change it? In a 4+2
algorithm, the fastest transform using stack variables only could then
be a 256 length transform, and then there would be a degradation for
longer transforms that used array references.

5. I don't have a completely good feeling yet for all of the
subtleties and speedups available for microcoding a problem
vs writing in lisp, interpreting, compiling, or compiling
into microcode. When a segment of code is going to be used over and
over again, and the code isn't going to change, shouldn't it be
in microcode?

6. I can make several FFT packages available in Lisp now. One is a
naive radix 2 butterfly algorithm, designed to be short to write and
implement in a hurry. The second is a radix 4+2 and radix 96 family
of transforms that were written for a vector architecture like the Cray,
but translated nearly literally into Lisp. Because the Cray encourages
temporary vectors, this radix 4+2 algorithm uses a lot of storage,
rather than transforming in place. I have not yet looked into the issues
I raised in 4 or 5, but these need attention as well.
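[Editorial sketch: Berman's Lisp sources are not included in this file, so as an illustration only, a "naive radix 2 butterfly algorithm" of the kind he describes in point 6 looks roughly like this in modern Python.]

```python
import cmath

def fft(xs):
    """Naive recursive radix-2 FFT; len(xs) must be a power of two."""
    n = len(xs)
    if n == 1:
        return [complex(xs[0])]
    evens = fft(xs[0::2])           # transform of even-indexed samples
    odds = fft(xs[1::2])            # transform of odd-indexed samples
    out = [0j] * n
    for k in range(n // 2):
        w = cmath.exp(-2j * cmath.pi * k / n)   # twiddle factor
        out[k] = evens[k] + w * odds[k]         # butterfly
        out[k + n // 2] = evens[k] - w * odds[k]
    return out
```

A radix 4+2 version does the same work with fewer twiddle multiplications, which is where the shifting-versus-dividing point above comes in.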

--  Bob Berman  (rhb@mc)

∂04-Mar-81  0957	Scott.Fahlman at CMU-10A 	Re: Translators    
Date:  4 March 1981 1212-EST (Wednesday)
From: Scott.Fahlman at CMU-10A
To: Dick Gabriel <RPG at SU-AI> 
Subject:  Re: Translators
CC: guy.steele at CMU-10A
In-Reply-To:  Dick Gabriel's message of 3 Mar 81 19:22-EST
Message-Id: <04Mar81 121256 SF50@CMU-10A>


Dick,
I notice in an earlier message that it was contemplated that a full set of
timings be done on CMU's modified TOPS-10 system running MACLISP.  As a 
point of information, all serious Maclisp work here has been moved to the
2060, now that we have one.  I think that running benchmarks for an obsolete
and obviously brain-damaged system which nobody should ever again be forced to
use for anything serious would be a waste of time, and I am not likely to
want to devote any effort to it (although the task would be relatively small
if we get things already translated into legal Maclisp, since the differences
are few).  I could devote some small amount of effort to benchmarking TOPS-20
maclisp, though there are other sites that have this as well and I would prefer
that they carry a good deal of the load on this.

We are willing, even eager, to get timings for Spice Lisp on the extended PERQ
(once we get an extended PERQ), but this effort will lag the others by 6 months
or so while we get our act together.  I would prefer to save our translation
and measurement cycles for that task, since lots of places can check out a
Maclisp.

All of this looks fairly interesting.  It may generate more heat than light,
but at least there will be some data to base the flames on, and the translation
aids should be a very useful side effect.
-- Scott

∂04-Mar-81  0959	CSVAX.char at Berkeley 	lisp benchmarking    
Date: 4 Mar 1981 09:00:47-PST
From: CSVAX.char at Berkeley
To: rpg@sail
Subject: lisp benchmarking
Cc: anlams!boyle@Berkeley, CSVAX.char@Berkeley, CSVAX.fateman@Berkeley

Richard Fateman has informed me of the effort you're organizing to
compare Lisp systems.  James Boyle (csvax.anlams!boyle@BERKELEY) and I
(csvax.anlams!char@BERKELEY) would like to be put on your mailing list
for lisp benchmarking.  We have a program, part of a program
transformation system, which you may be interested in including in the
benchmarking.  It currently runs on Franz, and on the IBM370 Lisp
available at Argonne.  We could create a special version of the code
that predefines variables instead of reading their values off of files;
I/O was the only real problem I had in converting the program to Franz
this past fall.  It is an interesting program in that it is a "real"
application of Lisp -- people have used the transformation system for
development of math software here at Argonne, as preprocessor to a
theorem prover, etc.  It is not so interesting from the viewpoint of
exercising a lot of different Lisp features --  mainly list access and
creation, and CONDing.  Jim Boyle estimates that an interesting
benchmark run would take 30-60 min. of Vax cpu time running under Franz
(interpreted).  This might be too long for benchmarking, if testing
resources are expensive.

∂04-Mar-81  1627	HEDRICK at RUTGERS 	something of possible interest
Date:  4 Mar 1981 1919-EST
From: HEDRICK at RUTGERS
Subject: something of possible interest
To: rpg at SU-AI

I am not entirely clear what is going on with your lisp timings
mailing list.  However you may possibly be interested in
looking at the file [rutgers]<hedrick>newlisp.  You can FTP it
without logging in, I think.  If you have to log in over FTP,
specify user name ANONYMOUS and any password.  This describes
the various tests I have done during design of ELISP, the new
extended addressing version of UCI Lisp for Tops-20.  I think
ELISP will not have much in the way of innovations.  It in
intended to be quite "classical".  I.e. something that we know
how to do, and know that the results of will be useful for us.
It is Lisp 1.6/UCI Lisp constructed with Lisp machine technology
(to the extent we can do it on the 20, no CDR-coding, since that
requires micro code changes.  But we do using a copying GC and
everything is done with typed pointers.)  I expect the performance to be
similar to that of UCI Lisp, as the basic structures will be the same.
It will differ mostly because of completely different data
representations and GC methods.  And because of extended addressing,
which on the 20 still has performance problems.  NEWLISP refers to these
problems without explaining them.  The main problem is in the design of
the hardware pager. This is the thing that maps virtual to physical
addresses.  It should be associative memory, but is implemented by a
table. The net effect of the table is that the same entry is used for
pages 1000, 3000, 5000, 7000, etc.  In fact, which line in the table is
used is determined by bits 774 of the page number (i.e. pages
1000,1001,1002, and 1003 are all stored in the same line).  There is a
kludge to prevent odd numbered sections from interfering with even
numbered ones (The section number is bits 777000), which is why I listed
pages 1000, 3000,etc., and not 0, 2000, ...  If you happen to be
unlucky, and have code in page 1000, a stack in page 3000, and
data in page 5000, your code can easily run a factor of 20
slower than it would otherwise.  By carefully positioning various
blocks of data most of the problems can be prevented.
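[Editorial sketch: the collision behavior Hedrick describes can be modeled in a few lines. The exact bit layout below is an assumption for illustration; it reproduces only the two properties he states, that octal pages 1000, 1001, 1002, 1003 share a table line, that 1000, 3000, 5000, ... share a line, and that odd sections are kept apart from even ones.]

```python
def pager_line(page):
    """Toy model of the KL-10 hardware pager table line selection.
    page is an octal page number; bit positions are assumed."""
    index = (page >> 2) & 0o177      # "bits 774" of the page number:
                                     # low 2 bits ignored, so pages
                                     # 1000..1003 land on one line
    odd_section = (page >> 9) & 1    # section-parity kludge keeping
                                     # odd sections off even lines
    return (odd_section << 7) | index
```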

Please note that ELISP is intended to be a quick and safe implementation.
That means that I am not trying to get the last few percent of efficiency.
I am doing things in ways that I believe will not require much debugging
time, even at the price of speed.  This is because I am a manager, and
don't have much time to write code or to support it after it is finished.
-------

∂06-Mar-81  1301	HES at MIT-AI (Howard Shrobe) 	Methodology considerations:  
Date:  6 MAR 1981 1556-EST
From: HES at MIT-AI (Howard Shrobe)
Subject: Methodology considerations:  
To: RPG at SU-AI

Re your comment about including GC time. I agree wholeheartedly and have been
having a bit of disagreement with Fateman about same.  In addition I would
suggest trying to get statistics on how time-shared machines degrade with load.
A lot of folks are trying to make estimates of personal versus time shared and
such choices can only be made if you know how many people can be serviced by a
VAX (KL-10, 2060, etc.) before performance drops off.  Some discussion of this
issue would be useful to such folks.

howie Shrobe

Subject: Lisp Timings Group
To: rpg at SU-AI
cc: correira at UTEXAS-11

Hi.  I've been involved with the maintenance/extensions of two lisps, UTLISP
(on CDC equipment) and UCILISP (Rutgers Lisp, etc).  One of the things that I
did in our version of UCILISP that was missed by Lefaivre (and, hence, Meehan)
was to speed up the interpreter.  (Lefaivre got a copy of my source shortly
before I made the speed ups.)  It actually wound up being a few percent faster
than MACLISP (both on the same TOPS-10 machine).  (I believe MACLISP source
code is close enough to make the same changes - this was very old/unchanged
code in the interpreter.)

Anyway, I'd like to volunteer running the tests on UCI Lisp on both a 2060
(TOPS-20) and a KI-10 (TOPS-10).  I'm a little hesitant about committing
myself to too much work but it looks like you'll have several people running
UCI Lisp so maybe the work will be spread around.  (I guess this means that
you should add me to your LISPTIMING and LISPTRANSLATORS lists.)

For easily transportable code, I'll run it on UTLISP but for any extensive
changes I'll pass.  The current person who is in charge of that Lisp may send
you a separate note.  I've tried to encourage him to do so.  The UTLISP was
(at one time) judged by the Japanese surveyors to be the fastest interpreted
Lisp.  (That is my recollection of the first survey that we were involved in,
sometime about the mid 70's?.  I'm sure it was solely due to the speed of the
hardware.)  It is not an elegant Lisp and has a lot of faults but is a pretty
fast interpreter.  The compiler is a crock - when it works.  It was someone's
masters thesis in the early 70's.

I strongly suggest that you run each of the various Lisps on different CPUs
whenever possible.  There was a note out last fall that compared Interlisp,
Maclisp, and UCI Lisp.  You may remember that I sent out a note that
complained that the timings for UCI Lisp were obviously on a different CPU
(probably a KI-10 compared to KL-10 or 2060).

I also suggest that while general purpose benchmarks may show a general
tendency, we should strive for timings of specific operations.  Such things as
CONS (including GC time), variable binding, function calling, arithmetic,
property list manipulation, array manipulation, stack manipulation (I guess
that's in function calling/variable binding), tree traversing (CAR/CDR
manipulations), FUNARG hacking, COND evaluations, PROG and looping hacking,
etc.  Personally my programs don't use much arithmetic so I don't think that's
too important but obviously some people do.

It would also be useful if people could supply timings of the machine the LISP
is run on.  Such things as instruction fetch times and memory speed are
obviously important.  This might be useful in comparing two Lisps on different
machines.  (Exactly how does a CYBER-170/750 compare with a DEC-2060?)

I don't think that the programs need to be very big or long-running.  They
just need to run long enough (10 seconds?) to minimize minor timing problems.
The important thing is that the various programs concentrate on one specific
area as much as possible.  Of course, all this needs to be balanced by some
programs that have a general mix of operations.

Another possible test, which is not really a timing test, would be to give all
us hackers some particular programming project which would take on the order
of an hour to do.  We would each do it in our own Lisp and report how long it
took us to do it (clock time) and how much resources we used (CPU time).  It
might be also reasonable to report how we did it (eg, used EMACS or some other
editor to write/fix the code versus edit in Lisp itself, how many functions
(macros?), how much commenting, how transparent/hackish the code is, etc.)  I
don't mean that this should be a programming contest but it might give some
idea what is involved in writing a program in each Lisp.  This involves
composing, executing, debugging, and compiling.  I feel this would be a truer
test of a LISP in a typical research situation if we could (hah!) discount the
various programmers skills/resources.  (This suggestion should really stir
up some flames!!)

	Mabry Tyson
	(tyson@utexas-11)
-------

∂10-Mar-81  0727	correira at UTEXAS-11  	lisp timings    
Date: 10 Mar 1981 at 0916-CST
From: correira at UTEXAS-11 
Subject: lisp timings
To: rpg at su-ai
cc: atp.tyson at utexas-20

If anyone is interested, I would be willing to do the work to run the
timing programs for UTLISP Version 5.0.  This is the latest version of
UTLISP, containing code to drag the dialect into the 20th Century of
LISP interpreters.  It has been my experience in the past that
most people shrug UTLISP off with a "oh, that's the one with the extra
pointer field" comment, but I think it is a pretty good LISP now and should be
included in the timings effort. However, the compiler is still a complete
crock (although I am working on a new one, it won't be ready for at least
6 months), so I will pass on doing compiler timings.  Please add my name to
the LISPTIMING and LISPTRANSLATORS mailing lists.

					Alfred Correira
					UTEXAS-11
-------

∂03-Mar-81  2109	Barrow at SRI-KL (Harry Barrow ) 	Lisp Timings    
Date:  3 Mar 1981 1727-PST
From: Barrow at SRI-KL (Harry Barrow )
Subject: Lisp Timings
To: rpg at SU-AI

	I would certainly like to be on your list of recipients of
LISP timing information.   Please add BARROW@SRI-AI to your list.

Did you know that Forrest Baskett has made some comparative timings
of one particular program (cpu-intensive) on several machines, in
several languages?   In particular, LISP was used on DEC 2060, KL-10,
KA-10, and MIT CADR machines   (CADR came out comparable with a KA-10,
but about 50% better if using compiled microcode).

What machines do you plan to use?   I would be very interested to
see how Dolphins, Dorados, and Lisp machines compare...


				Harry.



-------

Yes, I know of Baskett's study. There is at least one other Lisp
study, by Takeuchi in Japan.

So far we have the following Lisp systems with volunteers to
do the timings etc:

Interlisp on MAXC, Dolphin, Dorado
MacLisp on SAIL
InterLisp on SUMEX
UCILISP on Rutgers
SpiceLisp on PERQ
Lisp Machine (Symbolics, CADR)
Maclisp on AI, MC, NIL on VAX, NIL on S1 (if available)
InterLisp on F2
Standard Lisp on TOPS-10, B-1700, LISP370
TLC-lisp and muLisp on z-80
Muddle on DMS
Rutgers Lisp
Lisp Machine
UCILISP and MLISP on TOPS-10, TOPS-20
Jericho InterLisp
some Z80 LISP
Multics Maclisp
Cromemco Lisp on Z80
Franz Lisp on VAX UNIX
∂02-Mar-81  0004	Charles Frankston <CBF at MIT-MC> 	timings   
Date: 2 March 1981 00:55-EST
From: Charles Frankston <CBF at MIT-MC>
Subject: timings
To: CSVAX.fateman at BERKELEY
cc: LISP-FORUM at MIT-MC, masinter at PARC-MAXC, RWS at MIT-XX,
    guttag at MIT-XX

It is rather obvious that the timings you distributed are wall times for
the Lisp Machine, whereas the Vax and MC times count only time spent
directly executing code that is considered part of Macsyma.  I.e. the
Vax and MC times exclude not only garbage collection, but operating system
overhead, disk i/o and/or paging, time to output characters to terminals, etc.

I submit comparing wall times with (what the Multics people call) "virtual
CPU" time, is not a very informative exercise.  I'm not sure if the Lisp
Machine has the facilities to make analogous measurements, but everyone
can measure wall time, and in some ways that's the most useful comparison.
Is anyone willing to try the same benchmarks on the Vax and MC with just
one user on and measuring wall times?

Also, are there yet any Lisp machines with greater than 256K words?  No
one would dream of running Macsyma on a 256K word PDP10 and I presume the
same goes for a 1 Megabyte Vax.  The Lisp Machine may not have a time
sharing system resident in core, but in terms of amount of memory needed
for operating system overhead, the fanciness of its user interface
probably more than makes up for that.  I'll bet another 128K words of
memory would not be beyond the point of diminishing returns, insofar
as running Macsyma.

Lastly, the choice of examples.  Due to internal Macsyma optimizations,
these examples have a property I don't like in a benchmark.  The timings
for subsequent runs in the same environment differ widely from previous
runs.  It is often useful to be able to factor out setup times from a
benchmark.  These benchmarks would seem to run the danger of being dominated
by setup costs.  (Eg. suppose disk I/O is much more expensive on one system;
that is probably not generally interesting to a Macsyma user, but it could
dominate benchmarks such as these.)

I would be as interested as anyone else in seeing the various lisp systems
benchmarked.  I hope there is a reasonable understanding in the various
Lisp communities of how to do fair and accurate measurement, else the
results will be worse than useless; they will be damaging.


∂17-Mar-81  1155	Masinter at PARC-MAXC 	Re: GC 
Date: 17 Mar 1981 11:54 PST
From: Masinter at PARC-MAXC
Subject: Re: GC
In-reply-to: RPG's message of 16 Mar 1981 1234-PST
To: Dick Gabriel <RPG at SU-AI>
cc: LispTiming@su-ai, LispTranslators at SU-AI

Interlisp-D uses a reference-count garbage collection scheme. Thus, "garbage
collection" overhead is distributed to those functions which can modify reference
counts (CONS, RPLACA, etc.) with the following important exceptions:

	no reference counts are maintained for small numbers or literal atoms
	references from the stack are not counted

Reference counts are maintained in a separate table from the data being counted.
The table can be thought of as a hash table. In addition, the "default" entry in
the table is reference count = 1, so that in the "normal" case, there is no table
entry for a particular datum.

"Garbage collection" then consists of (a) sweeping the stack, marking data with a
"referenced from the stack" bit in the reference count table if necessary, (b)
sweeping the reference count table, collecting those data whose reference counts
are 0 and which are not referenced from the stack.
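[Editorial sketch: the scheme Masinter describes can be condensed into a few lines. Details such as the table representation and method names are assumptions; the properties kept from his description are that the side table's default entry is count 1, that stack references are not counted, and that a collection is a stack sweep plus a table sweep.]

```python
class RefCountHeap:
    """Deferred reference counting with a side table, default count 1."""
    def __init__(self):
        self.counts = {}             # no entry at all means count == 1

    def add_ref(self, obj):
        self.counts[obj] = self.counts.get(obj, 1) + 1

    def drop_ref(self, obj):
        self.counts[obj] = self.counts.get(obj, 1) - 1

    def collect(self, stack_roots):
        """(a) sweep the stack, marking data referenced from it;
        (b) sweep the table, freeing count-0 data not so marked."""
        marked = set(stack_roots)
        freed = [o for o, c in self.counts.items()
                 if c == 0 and o not in marked]
        for o in freed:
            del self.counts[o]
        return freed
```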

--------------

Because of this scheme, it is very difficult to measure performance of Interlisp-D
independent of garbage collection, because the overhead for garbage collection is
distributed widely (although the timing for the sweep phase can be separated
out).

Secondly, the choice of a reference count scheme over the traditional
chase-and-mark scheme used by most Lisps was conditioned by the belief that
with very large virtual address spaces, it was unreasonable to require touching
all active storage before any garbage could be collected.

This would indicate that any timings should take into consideration paging
performance as well as garbage collection overhead, if they are to accurately
consider the overall performance picture.

Larry

∂16-Mar-81  1429	HEDRICK at RUTGERS 	Re: Solicitation    
Date: 16 Mar 1981 1725-EST
From: HEDRICK at RUTGERS
Subject: Re: Solicitation  
To: RPG at SU-AI
cc: lispsources at SU-AI
In-Reply-To: Your message of 16-Mar-81 1526-EST

ELISP: extended R/UCI lisp.  This will be a reimplementation of
Rutgers/UCI lisp for Tops-20 using extended (30-bit) addressing. It is
implemented using typed pointers and a copying GC, but will otherwise be
almost exactly the same as R/UCI lisp (unless you are accustomed to
CDR'ing into the innards of strings, etc.).
  hardware - Model B KL processor or Jupiter.  I am not clear whether
	a 2020 has extended addressing.  If so that would also be
	usable.
  OS - Tops-20, release 5 or later (release 4 useable with minimal
	patching)
  binding type- shallow dynamic, with same stack mechanisms as
	UCI Lisp
  compiler - Utah standard lisp transported to our environment

At the moment performance appears to be the same as R/UCI Lisp, except
that the GC takes about twice as long for a given number of CONS cells
in use.  The time per CONS may be less for substantial programs, since
we can afford to run with lots of free space, whereas our big programs
are pushing address space, and may not be able to have much free space,
hence GC a lot.
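[Editorial sketch: the tradeoff Hedrick describes, that time per CONS falls when a copying GC runs with lots of free space, fits a one-line model. The proportionality constant is illustrative, not measured.]

```python
def gc_cost_per_cons(live_cells, free_cells, copy_cost=1.0):
    """Amortized GC cost charged to each CONS under a copying GC:
    one collection copies the live data and buys free_cells CONSes
    before the next collection, so cost/CONS ~ live/free."""
    return copy_cost * live_cells / free_cells
```

So a program pushing its address space (large live data, little free space) pays far more per CONS than the same program with room to spare.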

At the moment I have an interpreter that does a substantial part of Lisp
1.6.  I hope to finish Lisp 1.6 by the beginning of the summer.  I also
hope to have a compiler by then.  I am doing the interpreter personally,
and one of my staff is doing the compiler.  I am implementing R/UCI
lisp roughly in historical order, i.e. Lisp 1.6 first, then UCI lisp,
then Rutgers changes, though a few later features are slipping in (and
I am not doing anything I will have to undo).

Note that I have little if any interest in performance.  I want to match
R/UCI lisp, since users may complain if things suddenly slow down, but
that is about it.  I am more concerned about reliability (since I will
have little time to maintain it) and how long it takes to write it
(since I have little time to write it).  Our users are doing completely
traditional Lisp work, and have little or no interest in more flexible
binding or control semantics (we supplied a version of R/UCI lisp with
Scheme semantics, and no one was interested), nor in speed in
arithmetic.  The system is designed to be modular enough that
improvements can be done as needed.  I am giving some thought to
transportability, though not as much as the Utah folks. I think we
should be able to transport it to a system with at least 16 AC's and a
reasonable instruction set (e.g. VAX) with 2 man-months or less.

As far as the hardware we have available for testing, we will shortly
have 1M of MOS memory, 4 RP06's on 2 channel, and a model B KL processor
(the model matters since the model B is faster than the model A.  Note
that the processor model number is almost the only variable you care
about in a 20, but it is not derivable from the DEC marketing
designation, since a 2050 or 2040 may be either model.  However a 2060
is always model B).
-------

∂16-Mar-81  1433	HEDRICK at RUTGERS 	Re: GC    
Date: 16 Mar 1981 1728-EST
From: HEDRICK at RUTGERS
Subject: Re: GC  
To: RPG at SU-AI
cc: lisptranslators at SU-AI
In-Reply-To: Your message of 16-Mar-81 1534-EST

; the garbage collector.  its init routine is called gcinit and
; takes these args:
;   - the beginning of constant data space, which is really at the
;	start of the first of the two data spaces
;   - the first word beyond the constant data space, which is the
;	beginning of the usable part of the first data space
;   - the start of the second data space
;   - the first word beyond the second data space
	; garbage collector variables:
	;free - last used location in data space
	;lastl - last legal location in this data space - 1.  Trigger a GC if
	;   someone tries to go beyond this.  
	;stthis - start of this data space
	;enthis - end of this data space
	;stthat - start of other data space
	;enthat - end of other data space
	;stcnst - start of constant space
	;encnst - end of constant space

	.scalar lastl,stthis,enthis,stthat,enthat,stcnst,encnst

freesz==200000	;amount of free space at end of GC

   <<<initialization code omitted>>>


;This is a copying GC, modelled after the Lisp Machine GC, as
;described in Henry Baker's thesis.  There are two data spaces, old and new.
;A GC copies everything that is in use from old to new, and makes new the
;current one.  The main operation is translating objects.  If the object
;is absolute, e.g. an INUM, this is a no-op.  Only pointers into the old
;space are translated.  They are translated by finding the equivalent object
;in the new space, and using its pointer.  There are two cases:
;  - we have already moved the object.  In this case the first entry of
;	the old space copy is a pointer to the copy in new space.  These
;	pointers have the sign bit on, for easy detection.
;  - we have not moved the object.  In this case, we copy it to the end of
;	new space, and use the pointer to the beginning of this copy.
;At any given time, we have a pointer into new space.  Everything before
;this pointer has been translated.   Everything after it has not.  We also
;have to translate the stack and the constant area.  Indeed it is translating
;these areas that first puts something into new space to translate.

mark==400000,,0		;bit that says this has already been translated

;Because there are four different areas to translate, we have a separate
;routine to do the translation.
;  gctran:
;	w3 - first address to be translated.  W3 is updated, and is the
;		pointer mentioned above.  I.e. everything before W3 has
;		been translated
;	w4 - last address to be translated.

;The code within gctran avoids the use of the stacks, in order to avoid
;performance problems because of addressing conflicts between the stack
;and the areas being GC'ed.

gctran:	move o1,(w3)		;o1 - thing to be translated
	gettyp o1		;see what we have
	xct trntab(w2)		;translate depending upon type
	camge w3,w4		;see if done
	aoja w3,gctran		;no - next
	ret

;GCTRAX - special version of the above for doing new space.  Ends when
;we reach the free pointer
gctrax:	move o1,(w3)		;o1 - thing to be translated
	gettyp o1		;see what we have
	xct trntab(w2)		;translate depending upon type
	camge w3,free		;see if done
	aoja w3,gctrax		;no - next
	ret

;;TYPES
trntab:	jsp w2,cpyatm		; atom
	jfcl			;  constant atom
	jsp w2,cpycon		; cons
	jfcl			;  constant cons
	jsp w2,cpystr		; string
	jfcl			;  constant string
	jsp w2,cpychn		; channel
	jfcl			;  constant channel
	jfcl			; integer
	jsp w2,cpyrea		; real
	jrst 4,.		; hunk
	jfcl			; address
	jsp w2,cpyspc		; special

;here to translate a CONS cell - normally we copy it and use addr of new copy
cpycon:	skipge o2,(o1)		;do we already have a translation in old copy?
	jrst havcon		;yes - use it
	dmove o2,(o1)		;copy it
	dmovem o2,1(free)
	xmovei o2,1(free)	;make address into CONS pointer
	tlo o2,(object(ty%con,0))
	movem o2,(w3)		;put it in place to be translated
	tlc o2,(mark\object(ty%con,0)) ;make a pointer to put into old copy
	movem o2,(o1)		;and put it there
	addi free,2		;advance free list
	jrst (w2)

havcon:	tlc o2,(mark\object(ty%con,0)) ;turn into a real cons pointer
	movem o2,(w3)		;put in place to be translated
	jrst (w2)

  <<<the rest of the types are like unto this>>>
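The copying scheme the comments above describe can be sketched compactly without the MACRO details. The following is purely illustrative (the Cell class, field names, and list-as-new-space representation are my own, not Hedrick's code); it shows the two essential moves: leaving a forwarding pointer in the old copy, and keeping a scan pointer before which everything is translated.

```python
# Illustrative sketch of the two-space copying GC described above:
# a Cheney-style scan with forwarding pointers left in old space.

class Cell:
    """A cons cell in old space; car/cdr hold Cells or immediate objects."""
    def __init__(self, car, cdr):
        self.car, self.cdr = car, cdr
        self.forward = None          # stands in for the marked forwarding word

def translate(obj, new_space):
    """Return the new-space equivalent of obj (immediates are a no-op)."""
    if not isinstance(obj, Cell):
        return obj                   # absolute object, e.g. an INUM
    if obj.forward is not None:      # already moved: old copy holds the pointer
        return obj.forward
    copy = Cell(obj.car, obj.cdr)    # copy it to the end of new space
    obj.forward = copy
    new_space.append(copy)
    return copy

def collect(roots):
    """Copy everything reachable from roots; return the new space."""
    new_space = []
    # translating the stack/constant areas first puts something into new space
    roots[:] = [translate(r, new_space) for r in roots]
    scan = 0                         # everything before 'scan' is translated
    while scan < len(new_space):
        c = new_space[scan]
        c.car = translate(c.car, new_space)
        c.cdr = translate(c.cdr, new_space)
        scan += 1
    return new_space
```

Unreachable cells are simply never copied, which is what makes the cost proportional to live data rather than to the size of the space.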
-------

∂16-Mar-81  1810	Scott.Fahlman at CMU-10A 	Re: GC   
Date: 16 March 1981 2109-EST (Monday)
From: Scott.Fahlman at CMU-10A
To: Dick Gabriel <RPG at SU-AI> 
Subject:  Re: GC
In-Reply-To:  Dick Gabriel's message of 16 Mar 81 15:34-EST
Message-Id: <16Mar81 210911 SF50@CMU-10A>


Dick,
I believe we gave you a copy of the Spice Lisp internals document?  If so,
our GC algorithm is described there.  We can run with GC turned off, though
we pay some overhead anyway.  If incremental GC is turned on, the cost is
so spread out that it would be impossible to separate.  Perhaps the only fair
thing to do, if the thing of interest ultimately is large AI jobs, is to run
big things only, or small things enough times that a few GCs will have happened.
Then you can just measure total runtime.
-- Scott

∂16-Mar-81  1934	PLATTS at WHARTON-10 ( Steve Platt) 	lisp -- my GC and machine specs  
Date: 16 Mar 1981 (Monday) 2232-EDT
From: PLATTS at WHARTON-10 ( Steve Platt)
Subject: lisp -- my GC and machine specs
To:   rpg at SU-AI

  Dick, just a reminder about this all...  it is all describing a
lisp for the Z80 that I'd like to benchmark out of curiosity.
  1) All times will have to be done via stopwatch.  I might write a
quick (DO <n> <expr>) to repeat evaluation oh, say, 100 times or so
for better watch resolution.  GC time will *have* to be included
as I don't separate it out.
  2) I plan to be speaking to John Allen about his TLC lisp -- as there's
probably much similarity, I'd like to benchmark his at the same time.
I'll be sending him a copy of this letter.
 
  3) GC is a simple mark'n'sweep.  At some future time, I might replace
this with a compacting algorithm, which makes core-image saving simpler.
I GC cons cells and atom space, but not number or string space (number
space for bignums (>1000 hex or so, use pointers for small integers),
string space for pnames.)  Proper strings might be implemented in the
future sometime.
  4) Lisp is an unreleased CDL lisp, still under development.  It works
under CPM 1.4 or anything compatible with that, on a Z80.  CDL Lisp has
its roots in Maclisp, I guess you'd say.  Binding is deep.  Compiler?
Hah -- maybe after my dissertation is finished...  Macros -- the same.
I don't really view macros as essential, so they have a relatively low
priority... both have been thought about, but won't be benchmarkable.
  5) The hardware environment is relatively constrained.  48K physically
right now, may be up to 60K by benchmark time... (this figures into
roughly 8K free cells, the additional 12K will add 3K cells...)
No cache, only 2 8" floppies.  A typical "good" home system.
 
  After reading this all, it's probably relatively depressing when
compared to some of the major machines being benchmarked.  But it is
representative of the home computing environment...

  If you have any more specific questions, feel free to ask.

   -Steve Platt (Platts @ Wharton)

∂17-Mar-81  0745	Griss at UTAH-20 (Martin.Griss) 	Re: GC      
Date: 17 Mar 1981 0835-MST
From: Griss at UTAH-20 (Martin.Griss)
Subject: Re: GC  
To: RPG at SU-AI
cc: Griss at UTAH-20
In-Reply-To: Your message of 16-Mar-81 1334-MST

Standard LISP runs on a variety of machines, with existing LISPs, each with
a different GC; we will choose a machine set, and briefly describe each.

What is standard AA analysis???
M
-------

∂17-Mar-81  0837	Robert S. Boyer <BOYER at SRI-CSL> 	Solicitation  
Date: 17 March 1981  08:34-PST (Tuesday)
From: Robert S. Boyer <BOYER at SRI-CSL>
To:   Dick Gabriel <RPG at SU-AI>
Cc:   Boyer at SRI-CSL
Subject: Solicitation  

The machine on which I can run LISP timings is a Foonly F2,
which emulates a DEC KA processor and a BBN pager, and runs
a variant of Tenex called Foonex.  It has 1/2 million words
of 500 nanosecond memory, no cache, no drum, and a CDC
Winchester disk.

I have used Interlisp extensively, but I haven't studied the
compiler output or MACRO sources enough to claim expertise
at optimal coding.

I am marginally familiar with Maclisp now and I plan to
become more familiar soon.

For the purpose of getting a complete set of F2 vs. 2060
timings, I'd be willing to run tests of other PDP-10 LISPs
that are Tenex compatible, provided the tests can be
performed without too much understanding of the LISP
variants.

I have a benchmark that J Moore and I constructed a few
months ago to compare Interlisp and Maclisp.  The files on
ARPANET host CSL named <BOYER>IREWRITE and <BOYER>MREWRITE
contain, respectively, Interlisp and Maclisp code for a far
from optimal rewrite style theorem prover.  (To FTP log in
as Anonymous, password foo.)  MREWRITE is coded so that,
except for the statistics gathering, it is also in Franz
LISP.  To start, you invoke (SETUP).  Then run (TEST), as
many times as you want.  TEST returns some statistics -- but
I assume that RPG will want to standardize here.  (TEST)
turns over storage very rapidly, recurses a lot, does very
little arithmetic, and engages in no fancy structuring (e.g.
RPLACs).  Our intention in coding TEST was to produce
quickly a small facsimile of the heart of our rather large
theorem-proving system in order to compare LISP times.

By intentionally coding a program that would be easy to
translate from Interlisp to Maclisp, we did injustice to
both LISPs.  For example, we used recursion where we might
have used the I.S.OPR construct in Interlisp or the DO
construct in Maclisp -- or a MAP construct in either.

∂17-Mar-81  0847	Robert S. Boyer <BOYER at SRI-CSL> 	LISP Timings  
Date: 17 March 1981  08:43-PST (Tuesday)
From: Robert S. Boyer <BOYER at SRI-CSL>
To:   RPG at SU-AI
Subject:  LISP Timings
cc:   Boyer at SRI-CSL

Could we include a cost column in the final grand tally?  It
has been remarked that many people are trying to decide
which LISP system to use, now and in the future.  Cost will
be an important criterion.  Maintenance charges should be
included since over the life of a machine, they may approach
the purchase price.  It should be relatively easy for each
person who volunteers a machine to indicate the purchase
price and maintenance charges.

∂17-Mar-81  1155	Masinter at PARC-MAXC 	Re: GC 
Date: 17 Mar 1981 11:54 PST
From: Masinter at PARC-MAXC
Subject: Re: GC
In-reply-to: RPG's message of 16 Mar 1981 1234-PST
To: Dick Gabriel <RPG at SU-AI>
cc: LispTiming@su-ai, LispTranslators at SU-AI

Interlisp-D uses a reference-count garbage collection scheme. Thus, "garbage
collection" overhead is distributed to those functions which can modify reference
counts (CONS, RPLACA, etc.) with the following important exceptions:

	no reference counts are maintained for small numbers or literal atoms
	references from the stack are not counted

Reference counts are maintained in a separate table from the data being counted.
The table can be thought of as a hash table. In addition, the "default" entry in
the table is reference count = 1, so that in the "normal" case, there is no table
entry for a particular datum.

"Garbage collection" then consists of (a) sweeping the stack, marking data with a
"referenced from the stack" bit in the reference count table if necessary, (b)
sweeping the reference count table, collecting those data whose reference counts
are 0 and which are not referenced from the stack.
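A minimal model of this style of collector might look as follows. This is an illustration only, not Interlisp-D's implementation: the Cons/RefGC names and the dict-as-side-table representation are assumptions, but it preserves the key properties stated above (default count of 1 with no table entry, no counts for small numbers, stack references handled only at collection time).

```python
# Illustrative sketch of deferred reference counting with a side table.

class Cons:
    def __init__(self, car, cdr):
        self.car, self.cdr = car, cdr

class RefGC:
    def __init__(self):
        self.table = {}        # id(datum) -> count; absent means count == 1
        self.heap = set()      # all counted data

    def _counted(self, obj):
        # no reference counts for small numbers or literal atoms
        return isinstance(obj, Cons)

    def allocate(self, obj):
        self.heap.add(obj)     # "normal" case: no table entry, count is 1
        return obj

    def incref(self, obj):     # hook for CONS, RPLACA, RPLACD, ...
        if self._counted(obj):
            self.table[id(obj)] = self.table.get(id(obj), 1) + 1

    def decref(self, obj):
        if self._counted(obj):
            self.table[id(obj)] = self.table.get(id(obj), 1) - 1

    def collect(self, stack):
        """(a) note data referenced from the stack; (b) sweep the table,
        reclaiming zero-count data not referenced from the stack."""
        stack_refs = {id(o) for o in stack if self._counted(o)}
        dead = {o for o in self.heap
                if self.table.get(id(o), 1) == 0 and id(o) not in stack_refs}
        self.heap -= dead
        return dead
```

The point of the default-1 entry is visible in `allocate`: a freshly consed cell costs no table space at all until something other than its single reference touches it.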

--------------

Because of this scheme, it is very difficult to measure performance of Interlisp-D
independent of garbage collection, because the overhead for garbage collection is
distributed widely (although the timing for the sweep phase can be separated
out).

Secondly, the choice of a reference count scheme over the traditional
chase-and-mark scheme used by most Lisps was conditioned by the belief that
with very large virtual address spaces, it was unreasonable to require touching
all active storage before any garbage could be collected.

This would indicate that any timings should take into consideration paging
performance as well as garbage collection overhead, if they are to accurately
consider the overall performance picture.

Larry

∂17-Mar-81  1218	RPG  	Bureaucracy   
To:   lisptiming at SU-AI   
In sending messages around, the following facts are useful:
	RPG is on LISPSOURCES which is equal to
LISPTRANSLATORS, which is a subset of LISPTIMING.

So there is no need to send me a copy of everything, nor
is it necessary to have LISPTIMING and LISPSOURCES on the same
header, for example. Thanks.
			-rpg-

∂17-Mar-81  1921	Bernard S. Greenberg       <Greenberg at MIT-Multics> 	Re: Solicitation    
Date:     17 March 1981 2142-est
From:     Bernard S. Greenberg       <Greenberg at MIT-Multics>
Subject:  Re: Solicitation
To:       lispsources at SU-AI
Cc:       Multics-Lisp-people at MIT-MC

Well, Multics MacLisp, letsee:

Multics Maclisp, consisting of an interpreter, compiler, LAP (not used
by the compiler, tho), runtime, and utilities, was developed by
MIT Lab for Computer Science (LCS) in 1973 with the aim of exporting
the Macsyma math system to Multics (of which MIT-Multics was the only
one at the time).  Dave Reed (now at LCS) and Dave Moon (now at MIT-AI
and Symbolics, Inc.) were the principal implementors then, and
Alex Sunguroff (don't know where he is now) to a lesser degree.
Reed and Moon maintained it to 1976, I maintained it until now.
Its maintenance/support status since my flushance of Honeywell
(December 1980) is now up in the air, although Peter Krupp
at Honeywell is now nominally maintainer.

The interpreter and general scheme of things were developed partly
on the experience of PDP-10 Maclisp, vis-à-vis running out of space,
and an earlier Multics Lisp by Reed, vis-à-vis better ways to do this
on Multics.   Multics MacLisp features virtually infinite address
space (limited by the size of a Multics Process directory, which
is virtually unlimited), a relocating/copying garbage collector,
strings, bignums and other MacLisp features, general compatibility
with (ITS) MacLisp, and very significantly, the facility to interface
to procedures in other languages (including Multics System routines)
on Multics.

With the notable exception of the compiler, which is a large (and
understandable, as well as effective) Lisp program of two large
source files, the system is in PL/I and Multics assembler: the
assembler portions, including notably the evaluator, are that
way for speed.  The language was designed to be as close to
ITS Maclisp as possible at the time (1973), but has diverged some.
The compiler was developed as two modules, a semantics pass
reworked from the then-current version of the fearsome ITS
COMPLR/NCOMPLR (1973), and the code generator was written anew
by Reed (1973), although it uses NCOMPLR-like strategies
(I have a paper on this subject).

Although used in the support of Macsyma, the largest and most important
use of Multics Maclisp is as the implementation and extension language
of the Multics Emacs "text processing and video process management"
system.  Other large subsystems in Multics Maclisp over the years
have included a Multics crash and problem analysis subsystem and
a management-data modeling system (KOMS, about which I know little).

Pointers in Multics Maclisp are 72-bit, which includes a 9-bit
type field.  Non-bignum numbers (fixna and flona) are directly
encoded in the pointer, and do not require allocation, or the
hirsute "PDLNMK" scheme of ITS MacLisp. Symbols and strings are
allocated contiguously, and relocated at garbage-collect time.
Binding is the standard MacLisp shallow-binding (old values
saved on PDL, symbol contains "current" value).  Other Maclisp
language accoutrements (property lists, functional properties,
MacLisp macros, etc.) exist.
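The immediate-number trick can be modeled abstractly. Everything below other than "72-bit pointers with a 9-bit type field" is invented for illustration (the field layout and type codes are not documented here); it just shows why a fixnum needs no allocation: the value rides in the pointer word itself.

```python
# Illustrative model of encoding fixnums directly in a tagged pointer word.

TYPE_BITS  = 9
VALUE_BITS = 72 - TYPE_BITS          # 63 bits left for the value
TY_FIXNUM  = 0o1                     # hypothetical type code

def encode_fixnum(n):
    """Pack a signed fixnum into a 72-bit tagged word (no heap allocation)."""
    assert -(1 << (VALUE_BITS - 1)) <= n < (1 << (VALUE_BITS - 1))
    return (TY_FIXNUM << VALUE_BITS) | (n & ((1 << VALUE_BITS) - 1))

def decode(word):
    """Split a tagged word back into (kind, value)."""
    ty  = word >> VALUE_BITS
    val = word & ((1 << VALUE_BITS) - 1)
    if ty == TY_FIXNUM:
        if val >= 1 << (VALUE_BITS - 1):   # sign-extend negative fixnums
            val -= 1 << VALUE_BITS
        return ('fixnum', val)
    return ('pointer', val)
```

Because the type test is a shift and compare, arithmetic on small numbers never touches the allocator or the garbage collector.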

"A description of my OS:"

Well, the Multics Operating System enjoys/suffers a paged,
segmented virtual memory, implementing virtual storage and virtual
file access in a unified fashion. The paradigm is so well-known
that I cannot bear to belabor it any more.  The net effect
on Lisp is a huge address space, and heavy interaction
between the GC algorithm and performance.  Multics will run
in any size memory between 256K and 16 million 36-bit words.
The Multics at MIT (there are about three dozen Multices
all over the world now) has 3 million words of memory,
which I believe is 1 microsecond MOS. The MIT configuration runs
3 cpus - other sites vary between 1 and 5.  The cache per
CPU is 2k words, and is "very fast", but the system gets CPU limited,
and can rarely exceed 1 MIP per cpu (highly asynchronous processor),
although powerful character and bit string handling instructions
can do a lot faster work than a 1 mip load/store chain.  You
wanted to know about disks:

     Date:  16 March 1981 22:54 est
     From:  Sibert (W. Olin Sibert)

     An MSU0451 has 814 cylinders, of 47 records each. Its average seek time
     is 25 ms. (I don't know whether that's track-to-track, 10 percent, or
     half platter -- I'll bet it's track-to-track, though). Its average
     rotational latency is 8.33 ms. Its transfer rate is about 690K 8bit
     bytes (614K 9bit bytes) per second, or 6.7 ms. per Multics record.
     [1024 words]

I cannot really think of benchmark possibilities that would
show the performance of Multics MacLisp to great advantage.
For all its virtual memory, the antiquated basic architecture
of the Honeywell 6000 series (from the GE600) provides a
hostile environment to the Lisp implementor.  Only one register
(AQ) capable of holding a full Lisp pointer exists, and this
same register is the only one you can calculate in, either.
Thus, the compiler can't do useful register optimization
or store-avoidance, and comes nowhere near NCOMPLR in the
performance of its object code, even though NCOMPLR uses the
same techniques to implement the same language.
MacLisp type and array declarations are supported, and utilized
in the straightforward way by the compiler to improve generated code,
but in no way could it be claimed that what it generates is
competitive.

Multics MacLisp is "owned by MIT. It is distributed by MIT to anyone
who wants.  It is part of some Honeywell products [Emacs], and is
supported by Honeywell to the extent and only to the extent necessary
to keep these products operative. Honeywell can distribute it,
but may not charge for it, but may charge for products written in it".
Although its support is a current hot potato, interest in using
Multics Maclisp is continually growing, and interesting subsystems
in it are being developed as of this writing.

Anything else?